On the Importance of the Second Largest Eigenvalue on the Convergence Rate of Genetic Algorithms

Authors

  • Florian Schmitt
  • Franz Rothlauf
  • Armin Heinzl
Abstract

Genetic algorithms are sometimes disparagingly dismissed as just a fancier form of a plain, naive heuristic. One of the main reasons for this critique is the belief that a GA cannot guarantee global convergence within a certain amount of time. Since the global convergence of GAs using elitism has been proven elsewhere (11), in this work we extend previous work by J. Suzuki (13) and focus on identifying the determinants that influence the convergence rate of genetic algorithms. The convergence rate is addressed using Markov chain analysis: an elitist GA using mutation, recombination, and selection is described as a discrete stochastic process. By evaluating the eigenvalues of the transition matrix of the Markov chain, we prove that the convergence rate of a GA is determined by the second largest eigenvalue of the transition matrix. The proof is first given for diagonalizable transition matrices and then transferred to matrices in Jordan normal form. It allows a more detailed and deeper understanding of the principles of evolutionary search. As an extension of this work, we encourage researchers to work on proper estimates of the second largest eigenvalue of the transition matrix. With a good approximation, the convergence behavior of GAs could be described more exactly, and GAs would be one step closer to being a fast, reliable, and widely accepted optimization method.
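The role of the second largest eigenvalue can be illustrated numerically. The following sketch uses a small, hypothetical 3-state Markov chain (not the paper's GA model): the distance between the current state distribution and the stationary distribution contracts asymptotically by a factor of |lambda_2| per step, where lambda_2 is the second largest eigenvalue of the transition matrix in modulus.

```python
# Illustrative sketch (hypothetical 3-state chain, not the paper's model):
# for an ergodic Markov chain, the distance to the stationary distribution
# shrinks asymptotically like |lambda_2|^t.
import numpy as np

# Column-stochastic transition matrix: P[i, j] = Pr(next state i | state j).
P = np.array([
    [0.8, 0.1, 0.2],
    [0.1, 0.7, 0.3],
    [0.1, 0.2, 0.5],
])

vals, vecs = np.linalg.eig(P)
order = np.argsort(-np.abs(vals))   # sort eigenvalues by modulus, descending
i = order[0]                        # Perron eigenvalue, equal to 1
lambda2 = abs(vals[order[1]])       # modulus of the second largest eigenvalue

# Stationary distribution: the eigenvector for eigenvalue 1, normalized.
stat = np.real(vecs[:, i])
stat = stat / stat.sum()

# Iterate the chain from a point mass and record the L1 error at each step.
pi = np.array([1.0, 0.0, 0.0])
errs = []
for _ in range(30):
    errs.append(np.abs(pi - stat).sum())
    pi = P @ pi

ratio = errs[-1] / errs[-2]  # per-step contraction; approaches lambda2
```

After a few steps the observed contraction ratio settles at |lambda_2|, which is why a good estimate of this eigenvalue pins down the convergence rate.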


Similar Articles

Sequential and Mixed Genetic Algorithm and Learning Automata (SGALA, MGALA) for Feature Selection in QSAR

Feature selection is of great importance in Quantitative Structure-Activity Relationship (QSAR) analysis. This problem has been solved using meta-heuristic algorithms such as GA, PSO, ACO, and SA. In this work two novel hybrid meta-heuristic algorithms, Sequential GA and LA (SGALA) and Mixed GA and LA (MGALA), which are based on genetic algorithms and learning automata for QSAR f...


Two Novel Learning Algorithms for CMAC Neural Network Based on Changeable Learning Rate

The Cerebellar Model Articulation Controller (CMAC) Neural Network is a computational model of the cerebellum which acts as a lookup table. The advantages of CMAC are fast learning convergence and the capability of mapping nonlinear functions due to its local generalization of weight updating, single structure, and easy processing. In the training phase, the disadvantage of some CMAC models is an unstable phenomenon...


A New Hybrid Conjugate Gradient Method Based on Eigenvalue Analysis for Unconstrained Optimization Problems

In this paper, two extended three-term conjugate gradient methods based on the Liu-Storey (LS) conjugate gradient method are presented to solve unconstrained optimization problems. A remarkable property of the proposed methods is that the search direction always satisfies the sufficient descent condition independent of the line search method, based on eigenvalue analysis. The globa...


A Novel Experimental Analysis of the Minimum Cost Flow Problem

In the GA approach, the parameters that influence performance include population size, crossover rate, and mutation rate. Genetic algorithms are suitable for traversing large search spaces since they can do so relatively fast, and because the mutation operator diverts the method away from local optima, which tend to become more common as the search space grows. GAs are base...




Publication date: 2001